Results 1 - 4 of 4
1.
1st Workshop on NLP for COVID-19 at the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020; 2020.
Article in English | Scopus | ID: covidwho-2272652

ABSTRACT

We present COVID-QA, a Question Answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19. To evaluate the dataset, we compared a RoBERTa base model fine-tuned on SQuAD with the same model trained on SQuAD and our COVID-QA dataset. We found that the additional training on this domain-specific data leads to significant gains in performance. Both the trained model and the annotated dataset have been open-sourced at: https://github.com/deepset-ai/COVID-QA. © ACL 2020. All rights reserved.
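
A minimal sketch of the two-stage fine-tuning described above, using the Hugging Face Trainer for an extractive reader; the SQuAD-tuned checkpoint name and the Hub mirror name of COVID-QA are assumptions, not the authors' released training code:

```python
from datasets import load_dataset
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments)

# Start from a RoBERTa reader already fine-tuned on SQuAD (checkpoint name is an assumption).
checkpoint = "deepset/roberta-base-squad2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

# Assumes the COVID-QA annotations are mirrored on the Hub as "covid_qa_deepset";
# otherwise load the SQuAD-format JSON from the linked repository instead.
dataset = load_dataset("covid_qa_deepset", split="train")

max_length = 384  # COVID-QA contexts are full papers; answers outside the window fall back to [CLS]

def preprocess(examples):
    enc = tokenizer(examples["question"], examples["context"],
                    max_length=max_length, truncation="only_second",
                    padding="max_length", return_offsets_mapping=True)
    starts, ends = [], []
    for i, offsets in enumerate(enc["offset_mapping"]):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = enc.sequence_ids(i)
        ctx_start = seq_ids.index(1)
        ctx_end = len(seq_ids) - 1 - seq_ids[::-1].index(1)
        if offsets[ctx_start][0] > start_char or offsets[ctx_end][1] < end_char:
            starts.append(0)  # answer truncated away -> point at the [CLS] token
            ends.append(0)
        else:
            idx = ctx_start
            while idx <= ctx_end and offsets[idx][0] <= start_char:
                idx += 1
            starts.append(idx - 1)
            idx = ctx_end
            while idx >= ctx_start and offsets[idx][1] >= end_char:
                idx -= 1
            ends.append(idx + 1)
    enc["start_positions"] = starts
    enc["end_positions"] = ends
    enc.pop("offset_mapping")
    return enc

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments("roberta-covid-qa", learning_rate=3e-5,
                         per_device_train_batch_size=8, num_train_epochs=2)
Trainer(model=model, args=args, train_dataset=tokenized).train()
```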

2.
19th IEEE India Council International Conference, INDICON 2022; 2022.
Article in English | Scopus | ID: covidwho-2271937

ABSTRACT

A large number of people search the web for answers to their health-related problems. However, the number of sites where qualified and verified people answer these queries is quite low compared with the number of questions being posted, and the rate of such queries has increased further due to the COVID-19 pandemic. The main reason people find it difficult to get answers is the ineffective identification of semantically similar questions in the medical domain: in most cases an answer to a user's query already exists, the only caveat being that the question may be phrased differently from the one the user asked. In this research, we propose a Siamese-based BERT model to detect similar questions using a fine-tuning approach. The network is fine-tuned first with medical question-answer pairs and then with question-question pairs to obtain better question-similarity predictions. © 2022 IEEE.
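
A minimal sketch of the two-stage Siamese (bi-encoder) fine-tuning described above, built on the sentence-transformers library; the base checkpoint, example pairs, and loss choices are illustrative assumptions rather than the authors' setup:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

# A shared BERT encoder serves both branches of the Siamese network.
model = SentenceTransformer("bert-base-uncased")

# Stage 1: medical question-answer pairs (positives only, in-batch negatives).
qa_pairs = [
    InputExample(texts=["What are common COVID-19 symptoms?",
                        "Fever, dry cough and fatigue are the most common symptoms."]),
    InputExample(texts=["How long should I isolate after a positive test?",
                        "Isolation of at least five days is generally recommended."]),
]
# Stage 2: question-question pairs labeled with similarity in [0, 1].
qq_pairs = [
    InputExample(texts=["Is loss of smell a COVID symptom?",
                        "Can COVID-19 cause anosmia?"], label=1.0),
    InputExample(texts=["Is loss of smell a COVID symptom?",
                        "How do I book a vaccination appointment?"], label=0.0),
]

stage1 = DataLoader(qa_pairs, shuffle=True, batch_size=16)
stage2 = DataLoader(qq_pairs, shuffle=True, batch_size=16)

# Fine-tune on question-answer pairs first, then on question-question pairs.
model.fit(train_objectives=[(stage1, losses.MultipleNegativesRankingLoss(model))], epochs=1)
model.fit(train_objectives=[(stage2, losses.CosineSimilarityLoss(model))], epochs=1)

# Inference: rank stored questions by cosine similarity to the user query.
query_emb = model.encode("Can losing your sense of smell be caused by COVID?")
stored_emb = model.encode(["Is loss of smell a COVID symptom?",
                           "How do I book a vaccination appointment?"])
print(util.cos_sim(query_emb, stored_emb))
```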

3.
2022 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2022; 3228-3234, 2022.
Article in English | Scopus | ID: covidwho-2237494

ABSTRACT

Medical Frequently Asked Question (FAQ) retrieval aims to find the most relevant question-answer pairs for a given user query, which is of great significance for enhancing people's medical and health awareness and knowledge. However, due to medical data privacy and labor-intensive labeling, large-scale question-matching training datasets are lacking. Previous methods directly use question-answer pairs collected from search engines to train retrieval models, which yields poor performance. Inspired by recent advances in contrastive learning, we propose a novel contrastive curriculum learning framework for modeling user medical queries. First, we design different data augmentation methods to generate positive samples and different types of negative samples. Second, we propose a curriculum learning strategy that associates difficulty levels with the negative samples. Through a contrastive learning process that moves from easy to hard, our method achieves excellent results on two medical datasets. © 2022 IEEE.
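
A minimal sketch of contrastive training with an easy-to-hard curriculum over negatives, assuming word-dropout augmentation, an InfoNCE-style loss, and a simple epoch-based difficulty schedule; the paper's actual augmentations and difficulty measures are not reproduced here:

```python
import random
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    out = enc(**batch).last_hidden_state[:, 0]  # [CLS] embeddings
    return F.normalize(out, dim=-1)

def augment(q):
    # Positive view via random word dropout (one of many possible augmentations).
    words = q.split()
    kept = [w for w in words if random.random() > 0.1] or words
    return " ".join(kept)

def info_nce(anchor, positive, negatives, tau=0.05):
    # Score each anchor against its augmented positive and a pool of negatives.
    pos = (anchor * positive).sum(-1, keepdim=True) / tau
    neg = anchor @ negatives.T / tau
    logits = torch.cat([pos, neg], dim=-1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, labels)

queries = ["what are covid symptoms", "how long is quarantine", "is fever dangerous"]
easy_negs = ["how to renew a passport"]      # unrelated questions: easy negatives
hard_negs = ["what are flu symptoms"]        # similar surface form, different intent: hard negatives

opt = torch.optim.AdamW(enc.parameters(), lr=2e-5)
for epoch in range(4):
    hard_ratio = epoch / 3                   # curriculum: sample hard negatives more often over time
    negs = hard_negs if random.random() < hard_ratio else easy_negs
    loss = info_nce(embed(queries), embed([augment(q) for q in queries]), embed(negs))
    opt.zero_grad()
    loss.backward()
    opt.step()
```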

4.
7th IEEE International Conference on Network Intelligence and Digital Content, IC-NIDC 2021; 304-308, 2021.
Article in English | Scopus | ID: covidwho-1704219

ABSTRACT

Frequently Asked Question (FAQ) retrieval is a valuable task that aims to find the most relevant question-answer pair in a FAQ dataset for a given user query. Currently, most works implement FAQ retrieval by considering the similarity between the query and the question as well as the relevance between the query and the answer. However, the query-answer relevance is difficult to model effectively due to the heterogeneity of query-answer pairs in terms of syntax and semantics. To alleviate this issue and improve retrieval performance, we propose a novel approach that incorporates answer information into FAQ retrieval through question generation, which provides high-quality synthetic positive training examples for the dense retriever. Experimental results indicate that our method significantly outperforms term-based BM25 and a pretrained dense retriever on two recently published COVID-19 FAQ datasets. © 2021 IEEE.
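
A minimal sketch of the question-generation idea described above: synthesize queries from FAQ answers with a seq2seq model and use the resulting pairs as positive examples for a dense retriever. The doc2query checkpoint, FAQ entries, and retriever base model are illustrative assumptions, not the paper's configuration:

```python
from torch.utils.data import DataLoader
from transformers import T5ForConditionalGeneration, T5Tokenizer
from sentence_transformers import SentenceTransformer, InputExample, losses

# Seq2seq question generator (checkpoint name is an assumption).
qg_tok = T5Tokenizer.from_pretrained("doc2query/msmarco-t5-base-v1")
qg_model = T5ForConditionalGeneration.from_pretrained("doc2query/msmarco-t5-base-v1")

faq_answers = [
    "Wash your hands frequently and avoid touching your face to reduce infection risk.",
    "People with mild COVID-19 symptoms should isolate at home for at least five days.",
]

# Step 1: generate one synthetic query per FAQ answer (sampling adds variety).
inputs = qg_tok(faq_answers, padding=True, truncation=True, return_tensors="pt")
generated = qg_model.generate(**inputs, max_length=48, do_sample=True, top_k=50)
synthetic_queries = qg_tok.batch_decode(generated, skip_special_tokens=True)

# Step 2: train a dense retriever on the synthetic (query, answer) positives,
# relying on in-batch negatives.
retriever = SentenceTransformer("distilbert-base-uncased")
train_examples = [InputExample(texts=[q, a]) for q, a in zip(synthetic_queries, faq_answers)]
loader = DataLoader(train_examples, shuffle=True, batch_size=8)
retriever.fit(train_objectives=[(loader, losses.MultipleNegativesRankingLoss(retriever))],
              epochs=1)
```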
